    Collaborative Programming of Conditional Robot Tasks

    Conventional robot programming methods are not suited for non-experts to intuitively teach robots new tasks. For this reason, the potential of collaborative robots for production cannot yet be fully exploited. In this work, we propose an active learning framework in which the robot and the user collaborate to incrementally program a complex task. Starting with a basic model, the robot's task knowledge can be extended over time when new situations require additional skills. To this end, an on-line anomaly detection algorithm automatically identifies new situations during task execution by monitoring the deviation between measured and commanded sensor values. The robot then triggers a teaching phase, in which the user decides to either refine an existing skill or demonstrate a new skill. The different skills of a task are encoded in separate probabilistic models and structured in a high-level graph, guaranteeing robust execution and successful transitions between skills. In the experiments, our approach is compared to two state-of-the-art Programming by Demonstration frameworks on a real system. The results show increased intuitiveness and task performance, allowing shop-floor workers to program industrial tasks with our framework.
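
    A minimal sketch of the anomaly-triggered teaching loop described above, assuming a simple threshold on the deviation between commanded and measured sensor values; the function names, the fixed threshold and the Euclidean norm are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def is_anomalous(commanded, measured, threshold=0.05):
    """Flag a step whose measured values deviate too far from the commanded ones."""
    deviation = np.linalg.norm(np.asarray(measured) - np.asarray(commanded))
    return deviation > threshold

def execute_with_monitoring(trajectory, read_sensors, threshold=0.05):
    """Execute a trajectory step by step; return the step index at which an anomaly
    occurs (triggering the collaborative teaching phase), or None if none occurs."""
    for step, commanded in enumerate(trajectory):
        measured = read_sensors()  # e.g. measured joint positions or wrench values
        if is_anomalous(commanded, measured, threshold):
            return step            # hand over to the user: refine or add a skill
    return None
```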

    Capability-based Frameworks for Industrial Robot Skills: a Survey

    The research community uses terms such as skill, action and atomic unit inconsistently when describing robots' capabilities. However, to integrate such capabilities into industrial scenarios, a standardization of these descriptions is necessary. This work uses a structured review approach to identify commonalities and differences across the research community on robot skill frameworks. Through this method, 210 papers were analyzed and three main results were obtained. First, the vast majority of authors agree on a taxonomy based on task, skill and primitive. Second, the most investigated robot capabilities are pick and place. Third, industrially oriented applications focus more on simple robot capabilities with fixed parameters while ensuring safety aspects. Therefore, this work emphasizes that a taxonomy based on task, skill and primitive should be used by future works to align with the existing literature. Moreover, further research is needed in the industrial domain on parametric robot capabilities that still ensure safety.
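
    A small illustration of the task / skill / primitive taxonomy that the survey finds most authors agree on; the concrete fields and the pick-and-place example are assumptions made purely for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Primitive:
    """Atomic, directly executable motion or sensing unit."""
    name: str
    parameters: Dict[str, float] = field(default_factory=dict)

@dataclass
class Skill:
    """Parameterizable building block composed of primitives."""
    name: str
    primitives: List[Primitive] = field(default_factory=list)

@dataclass
class Task:
    """Application-level goal composed of skills."""
    name: str
    skills: List[Skill] = field(default_factory=list)

pick_and_place = Task("pick_and_place", [
    Skill("pick", [Primitive("move_to", {"x": 0.4, "y": 0.1, "z": 0.2}),
                   Primitive("close_gripper")]),
    Skill("place", [Primitive("move_to", {"x": 0.7, "y": 0.1, "z": 0.2}),
                    Primitive("open_gripper")]),
])
```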

    Hand Pose-based Task Learning from Visual Observations with Semantic Skill Extraction

    Learning from Demonstration is a promising technique to transfer task knowledge from a user to a robot. We propose a framework for task programming by observing the human hand pose and object locations solely with a depth camera. By extracting skills from the demonstrations, we are able to represent what the robot has learned, generalize to unseen object locations and optimize the robotic execution instead of replaying a non-optimal behavior. A two-stage segmentation algorithm that employs skill template matching via Hidden Markov Models has been developed to extract motion primitives from the demonstration and to give them semantic meaning. In this way, the transfer of task knowledge is improved from a simple replay of the demonstration towards a semantically annotated, optimized and generalized execution. We evaluated the extraction of a set of skills in simulation and show that the task execution can be optimized by such means.
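
    A hedged sketch of the skill-template-matching step: one Hidden Markov Model per skill is trained on example trajectories, and a candidate segment is labeled with the template that explains it best. It uses the hmmlearn library; the feature layout and the number of hidden states are assumptions, not the paper's configuration.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_skill_templates(demos_per_skill, n_states=5):
    """demos_per_skill maps a skill name to a list of (T_i, D) feature arrays
    (e.g. hand pose features over time). Returns one trained HMM per skill."""
    templates = {}
    for skill, demos in demos_per_skill.items():
        X = np.concatenate(demos)              # stack all demonstrations
        lengths = [len(d) for d in demos]      # so the HMM knows sequence borders
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        templates[skill] = model
    return templates

def label_segment(segment, templates):
    """Assign a segment to the skill template with the highest log-likelihood."""
    scores = {skill: model.score(segment) for skill, model in templates.items()}
    return max(scores, key=scores.get)
```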

    Segmentation and Coverage Planning of Freeform Geometries for Robotic Surface Finishing

    Surface finishing such as grinding or polishing is a time-consuming task, involves health risks for humans and is still largely performed by hand. Due to the high curvatures of complex geometries, different areas of the surface cannot be reached optimally by a simple strategy using a tool with a relatively large and flat finishing disk. In this paper, a planning method is presented that uses a variable contact point on the finishing disk as an additional degree of freedom. Different strategies for covering the workpiece surface are used to optimize the surface finishing process and ensure the coverage of concave areas. To this end, an automatic segmentation method is developed that finds areas with a uniform machining strategy based on the exact tool and workpiece geometry. Further, a method for planning coverage paths is presented, in which the contact area is modeled to realize an adaptive spacing between path lines. The approach was evaluated in simulation and in practical experiments on the DLR SARA robot. The results show high coverage for complex freeform geometries and that adaptive spacing can optimize the overall process by reducing uncovered gaps and overlaps between coverage lines.
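
    An illustrative sketch of adaptive spacing between coverage lines: the offset to the next path line is derived from a locally modeled contact width of the finishing disk, so regions with narrow contact (e.g. concave areas) receive denser lines. The contact-width model and the overlap ratio are assumptions, not the paper's exact formulation.

```python
def plan_line_offsets(surface_width, contact_width_at, overlap=0.2):
    """Return lateral offsets of coverage lines across the workpiece.

    contact_width_at(offset) is a caller-supplied model of the effective tool
    contact width at a given lateral position; overlap is the desired fraction
    of overlap between neighboring lines.
    """
    offsets = [0.0]
    while offsets[-1] < surface_width:
        width_here = contact_width_at(offsets[-1])
        step = max(width_here * (1.0 - overlap), 1e-4)   # keep a minimum advance
        offsets.append(offsets[-1] + step)
    return offsets

# Example: a constant 20 mm contact width yields uniformly spaced lines.
lines = plan_line_offsets(surface_width=0.3, contact_width_at=lambda s: 0.02)
```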

    Unifying Skill-Based Programming and Programming by Demonstration through Ontologies

    Smart manufacturing requires easily reconfigurable robotic systems to increase flexibility in the presence of market uncertainties by reducing the set-up times for new tasks. One enabler of fast reconfigurability is intuitive robot programming. On the one hand, offline skill-based programming (OSP) allows the definition of new tasks by sequencing pre-defined, parameterizable building blocks termed skills in a graphical user interface. On the other hand, programming by demonstration (PbD) is a well-known technique that uses kinesthetic teaching for intuitive robot programming. This work presents an approach to automatically recognize skills from the human demonstration and parameterize them using the recorded data. The approach further unifies both programming modes of OSP and PbD with the help of an ontological knowledge base and empowers the end user to choose the preferred mode for each phase of the task. In the experiments, we evaluate two scenarios with different sequences of programming modes being selected by the user to define a task. In each scenario, skills are recognized by a data-driven classifier and automatically parameterized from the recorded data. The fully defined tasks consist of both manually added and automatically recognized skills and are executed in the context of a realistic industrial assembly environment.
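
    A minimal sketch of how one skill definition could serve both programming modes: in OSP the parameters are filled in by the user in a GUI, in PbD the same parameters are extracted from the recorded demonstration. The skill schema and the extraction rule below are illustrative assumptions, not the ontology or classifier used in the paper.

```python
import numpy as np

PICK_SKILL_SCHEMA = {"name": "pick", "parameters": ["grasp_pose", "approach_height"]}

def instantiate_from_gui(schema, user_values):
    """Offline skill-based programming: parameters come from the user interface."""
    return {"skill": schema["name"],
            "parameters": {p: user_values[p] for p in schema["parameters"]}}

def instantiate_from_demo(schema, recorded_positions):
    """Programming by demonstration: parameters are filled from recorded data.
    recorded_positions is a (T, 3) array of end-effector positions."""
    recorded_positions = np.asarray(recorded_positions)
    grasp_pose = recorded_positions[-1]                        # position at grasp time
    approach_height = float(recorded_positions[:, 2].max() - grasp_pose[2])
    return {"skill": schema["name"],
            "parameters": {"grasp_pose": grasp_pose.tolist(),
                           "approach_height": approach_height}}
```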

    Collaborative programming of robotic task decisions and recovery behaviors

    Programming by demonstration is reaching industrial applications and allows non-experts to teach new tasks without writing code manually. However, a certain level of complexity, such as online decision making or the definition of recovery behaviors, still requires experts using conventional programming methods. Even so, experts cannot foresee all possible faults in a robotic application. To address this, we present a framework in which user and robot collaboratively program a task that involves online decision making and recovery behaviors. A task graph is created that represents a production task and possible alternative behaviors. Nodes represent start, end or decision states, and links define actions for execution. This graph can be incrementally extended through autonomous anomaly detection, which requests the user to add knowledge for a specific recovery action. Besides our proposed approach, we introduce two alternative approaches that manage recovery behavior programming and compare all approaches extensively in a user study involving 21 subjects. This study revealed the strengths of our framework and analyzed how users act when adding knowledge to the robot. Our findings advocate using a framework with a task-graph-based knowledge representation and autonomous anomaly detection not only for initiating recovery actions but particularly for transferring them to a robot.
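
    A rough sketch of the task-graph idea described above: nodes are start, end or decision states, links carry executable actions, and a detected anomaly inserts a decision node together with a user-taught recovery link. The graph library, node names and method signatures are illustrative assumptions.

```python
import networkx as nx

def build_base_task_graph():
    g = nx.DiGraph()
    g.add_node("start", kind="start")
    g.add_node("done", kind="end")
    g.add_edge("start", "done", action="insert_part")    # nominal production action
    return g

def extend_with_recovery(g, failed_edge, recovery_action):
    """Insert a decision node on the failed link and attach the taught recovery."""
    src, dst = failed_edge
    decision = f"anomaly_{src}_{dst}"
    nominal_action = g.edges[src, dst]["action"]
    g.remove_edge(src, dst)
    g.add_node(decision, kind="decision")
    g.add_edge(src, decision, action=nominal_action)      # execution up to the anomaly
    g.add_edge(decision, dst, action=recovery_action)     # user-demonstrated recovery
    return g

graph = extend_with_recovery(build_base_task_graph(), ("start", "done"), "re-grasp_part")
```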

    Advanced Myocontrol for Hand and Wrist Prostheses

    Myocontrol is the use of a human-machine interface based on muscle signals to control a robotic or prosthetic device. A challenging problem in research is the simultaneous and proportional (s/p) control of multiple degrees of freedom (DOF). Besides the common sensing technique of surface electromyography (sEMG), force myography (FMG) is additionally employed to improve the control experience. Machine learning approaches are used to map the input of the different sensor modalities onto continuous control signals of a prosthesis. Throughout this work, four experiments were conducted with the goal of reducing the training time and improving the control experience of such devices. In the first experiment we showed that, for a fusion of both signal modalities, the online performance is invariant to different sensor placements on the forearm. The second experiment evaluated the online performance using different machine learning approaches in which either one or both signal modalities were employed. The best results were achieved with a combination of both signal types and with FMG only. As part of this work, an existing method called linearly enhanced training (LET) is adapted to the multi-modal sensory input. This method creates artificial training data for combinations of defined hand and wrist actions and removes the need to record them explicitly as training data, which usually cannot be achieved by amputees. It follows that the training time is significantly reduced. In the related experiment, data was gathered from 10 healthy subjects in order to find a generalized set of parameters for LET. Once determined, these parameters can be the basis of LET for new users. In the last experiment, the set of generalized parameters was used with nine healthy subjects to evaluate the performance of the approach involving LET data. We showed that with LET the subjects performed equally well compared to the approach which required the execution of all combined activations during training. This qualifies LET as a valid extension to existing control methods, as the training time is drastically reduced and no combined activations need to be executed. The goal is to use the same set of parameters and algorithm for amputees, who may not be able to produce combined activations during training.
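
    A rough sketch of the idea behind linearly enhanced training (LET): artificial samples for combined hand and wrist activations are synthesized from recorded single-activation data, so the combinations never have to be recorded explicitly. The linear superposition with one scaling parameter per action is an illustrative assumption, not the work's exact formulation.

```python
import numpy as np

def synthesize_combined_samples(single_action_data, combo, scales, n_samples=200, seed=0):
    """single_action_data maps an action name to an (N, D) array of sEMG/FMG features;
    combo is a tuple of action names to be combined; scales holds the generalized
    LET parameters. Returns artificial feature vectors and their combined label."""
    rng = np.random.default_rng(seed)
    parts = []
    for action in combo:
        data = single_action_data[action]
        idx = rng.integers(0, len(data), size=n_samples)    # resample single actions
        parts.append(scales[action] * data[idx])
    X = np.sum(parts, axis=0)                               # linear superposition
    y = ["+".join(combo)] * n_samples                       # label of the combination
    return X, y
```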

    Identification of Common Force-based Robot Skills from the Human and Robot Perspective

    Learning from Demonstration (LfD) can significantly speed up the knowledge transfer from human to robot, which has been proven for relatively unconstrained actions such as pick and place. However, transferring contact or force-based skills (contact skills) to a robot is noticeably harder, since force and position constraints need to be considered simultaneously. We propose a set of contact skills which differ in their force and kinematic constraints. In a first user study, several subjects were asked to name a variety of force-based interactions, from which skill names were derived. In a second and third user study, the identified skill names were used to let a test group of subjects classify the shown interactions. To evaluate the skill recognition from the robot perspective, we propose a feature-based classification scheme to recognize such skills with a robotic system in an LfD setting. Our findings show that humans are able to understand the meaning of the different skills and that, using the classification pipeline, the robot is able to recognize the different skills from human demonstrations.
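
    A hedged sketch of a feature-based classifier in the spirit described above: simple statistics over recorded force and position trajectories serve as features for a standard classifier. The concrete features and the choice of an SVM are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(force, position):
    """force and position are (T, 3) trajectories recorded during one demonstration."""
    return np.concatenate([
        force.mean(axis=0),                                  # average contact force
        force.std(axis=0),                                   # force variability
        [np.max(np.linalg.norm(force, axis=1))],             # peak force magnitude
        position[-1] - position[0],                          # net displacement
    ])

def train_skill_classifier(demos, labels):
    """demos is a list of (force, position) pairs, labels the skill names."""
    X = np.stack([extract_features(f, p) for f, p in demos])
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X, labels)
    return clf
```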